This is a talk given at the Redwood Center for Theoretical Neuroscience, UC Berkeley, on April 17, 2007. The speaker is Steve Waydo of the California Institute of Technology.
Neurons have been identified in the human medial temporal lobe (MTL) that display a strong selectivity for only a few stimuli (such as familiar individuals or landmark buildings) out of perhaps 100 presented to the test subject (Quian Quiroga et al., Nature 2005). While highly selective for a particular object or category, these cells are remarkably insensitive to different presentations (i.e., different poses and views) of their preferred stimulus. This invariant, sparse, and explicit representation of the world may be crucial to the transformation of complex visual stimuli into more abstract memories.

We first discuss how best to quantify sparseness, particularly in very sparse systems where estimation biases are significant, and show the results of this analysis applied to human MTL data (Waydo et al., J. Neurosci. 2006).

From there we move into the computational realm. Sparse coding as a computational constraint applied to the representation of natural images has been shown to produce receptive fields strikingly similar to those observed in mammalian primary visual cortex (Olshausen & Field, 1996, 1997). We apply sparse coding further along the visual hierarchy: not directly to images, but rather to an invariant feature-based representation of images analogous to that found in inferotemporal cortex. This combination of sparseness and invariance naturally leads to explicit category representation: by exposing the model to images drawn from different categories, we develop units that respond selectively to those categories. We will show results of applying this method both to unsupervised image category discovery and to differentiating between images of different individuals.
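To make the notion of "quantifying sparseness" concrete, here is a minimal sketch of one widely used lifetime-sparseness index (the Treves-Rolls / Vinje-Gallant measure). This is an illustrative assumption, not necessarily the exact measure of Waydo et al. (2006), whose analysis additionally corrects for the biases mentioned above; this sketch omits any bias correction.

```python
import numpy as np

def sparseness_index(rates):
    """Treves-Rolls / Vinje-Gallant sparseness of a cell's response profile.

    Returns a value in [0, 1]: 0 for a cell that responds equally to all
    stimuli, approaching 1 as the response concentrates on a single stimulus.
    (Illustrative measure only; no bias correction is applied here.)
    """
    r = np.asarray(rates, dtype=float)
    n = r.size
    # "Activity fraction": (mean response)^2 / mean squared response.
    a = (r.sum() / n) ** 2 / (np.square(r).sum() / n)
    return (1.0 - a) / (1.0 - 1.0 / n)

# A cell firing to exactly one of 100 stimuli is maximally sparse:
print(sparseness_index([1.0] + [0.0] * 99))  # 1.0
# A cell responding equally to all 100 stimuli is minimally sparse:
print(sparseness_index([1.0] * 100))         # 0.0
```

In very sparse regimes the raw index is biased upward by trial-to-trial noise, which is exactly why the talk treats bias correction as a question in its own right.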
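The modeling idea above can be sketched in a few lines: minimize a reconstruction error plus an L1 sparseness penalty over codes, alternating with a gradient step on the dictionary. Everything below is a toy illustration under stated assumptions: the "invariant feature vectors" are synthetic two-cluster data standing in for two image categories (not the authors' dataset), and ISTA plus projected gradient descent is one standard way to optimize an Olshausen & Field-style energy, not necessarily the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_codes(X, D, lam=0.3, n_iter=100):
    """Infer coefficients Z minimizing ||X - Z @ D||^2 + lam * |Z|_1
    by ISTA (iterative soft-thresholding)."""
    L = max(np.linalg.norm(D @ D.T, 2), 1e-6)  # Lipschitz constant of the gradient
    Z = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(n_iter):
        Z = Z - (Z @ D - X) @ D.T / L                           # gradient step
        Z = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)   # soft threshold
    return Z

# Hypothetical "invariant feature vectors": two well-separated clusters
# standing in for feature representations of two image categories.
centers = np.array([[3.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 3.0, 0.0]])
X = np.repeat(centers, 20, axis=0) + 0.05 * rng.standard_normal((40, 4))

# Unsupervised learning: alternate sparse inference with a gradient step
# on the dictionary D, keeping each atom unit-norm.
D = rng.standard_normal((2, 4))
D /= np.linalg.norm(D, axis=1, keepdims=True)
for _ in range(200):
    Z = sparse_codes(X, D)
    D -= 0.05 * (Z.T @ (Z @ D - X)) / len(X)
    D /= np.linalg.norm(D, axis=1, keepdims=True)

Z = sparse_codes(X, D)
# Under sparseness pressure the units tend to specialize: samples from each
# category load mainly on one unit -- an explicit category representation.
print(np.round(np.abs(Z[:20]).mean(axis=0), 2))
print(np.round(np.abs(Z[20:]).mean(axis=0), 2))
```

The point of the toy run is qualitative: no category labels are used anywhere, yet the combination of an invariant feature representation and a sparseness constraint yields units whose mean activation profiles differ sharply between the two categories.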